
    OpenAIRE dashboard for repository managers: from repositories for repositories

    Poster presented at the 12th International Conference on Open Repositories (OR 2017), Brisbane, Australia, 26-30 June 2017. OpenAIRE is the European Union initiative for an Open Access Infrastructure for Research, supporting open scholarly communication and access to the research output of European-funded projects and beyond. Thanks to its infrastructure services, objects in the graph are harmonized to achieve semantic homogeneity, de-duplicated to avoid ambiguities, and enriched with missing properties and/or relationships. OpenAIRE data sources interested in enhancing or extending their content may benefit from this graph in a number of ways. This paper presents the OpenAIRE dashboard for data providers, which realizes an institutional repository Literature Broker Service for OpenAIRE data sources. The Service implements a subscription-and-notification paradigm supporting institutional repositories
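The subscription-and-notification paradigm described above can be sketched as a tiny broker: repositories subscribe to enrichment topics and receive events for records they hold. All names here (Broker, the topic string, the event fields) are illustrative assumptions, not the actual OpenAIRE Broker API.

```python
from collections import defaultdict

class Broker:
    """Repositories subscribe to topics (e.g. records missing a DOI); the
    broker delivers matching enrichment events to each subscriber's inbox."""

    def __init__(self):
        self.subscriptions = defaultdict(list)   # topic -> repository ids
        self.inbox = defaultdict(list)           # repository id -> events

    def subscribe(self, repo_id, topic):
        self.subscriptions[topic].append(repo_id)

    def publish(self, topic, event):
        # Notify every repository subscribed to this topic
        for repo_id in self.subscriptions[topic]:
            self.inbox[repo_id].append(event)

broker = Broker()
broker.subscribe("repo-A", "ENRICH/MISSING/DOI")
broker.publish("ENRICH/MISSING/DOI",
               {"record": "oai:repo-A:42", "doi": "10.1234/x"})
```

The repository side would then periodically drain its inbox and apply the suggested enrichments to its local records.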

    OpenAIRE-Connect: Open Science as a Service for repositories and research communities

    Communication presented at the 12th International Conference on Open Repositories (OR 2017), Brisbane, Australia, 26-30 June 2017. OpenAIRE-Connect fosters transparent evaluation of results and facilitates reproducibility of science for research communities by enabling a scientific communication ecosystem that supports the exchange of artefacts, packages of artefacts, and links between them across communities and content providers. To this aim, OpenAIRE-Connect will introduce and implement the concept of Open Science as a Service (OSaaS) on top of the existing OpenAIRE infrastructure, delivering out-of-the-box, on-demand deployable tools in support of Open Science. OpenAIRE-Connect will realize and operate two OSaaS services. The first will serve research communities to (i) publish research artefacts (packages and links), and (ii) monitor their research impact. The second will engage and mobilize content providers, serving them with services that enable notification-based exchange of research artefacts to support their transition towards Open Science paradigms. Both services will be offered on-demand according to the OSaaS approach and will hence be re-usable by different disciplines and providers, each with different practices and maturity levels, so as to favor a shift towards a uniform cross-community and cross-content-provider scientific communication ecosystem

    SCINOBO: a novel system classifying scholarly communication in a dynamically constructed hierarchical Field-of-Science taxonomy

    Classifying scientific publications according to Field-of-Science taxonomies is of crucial importance, powering a wealth of relevant applications including search engines, tools for scientific literature, recommendation systems, and science monitoring. Furthermore, it allows funders, publishers, scholars, companies, and other stakeholders to organize scientific literature more effectively, calculate impact indicators along Science Impact pathways, and identify emerging topics that can also facilitate Science, Technology, and Innovation policy-making. As a result, existing classification schemes for scientific publications underpin a large area of research evaluation, with several classification schemes currently in use. However, many existing schemes are domain-specific, comprise few levels of granularity, and require continuous manual work, making it hard to follow the rapidly evolving landscape of science as new research topics emerge. Building on our previous work on SciNoBo, which incorporates metadata and graph-based publication bibliometric information to assign Field-of-Science fields to scientific publications, we propose a novel hybrid approach that further employs Neural Topic Modeling and Community Detection techniques to dynamically construct a Field-of-Science taxonomy, used as the backbone of automatic publication-level Field-of-Science classifiers. Our proposed Field-of-Science taxonomy is based on the OECD Fields of Research and Development (FORD) classification, developed in the framework of the Frascati Manual, which contains knowledge domains at broad (first level (L1), one-digit) and narrower (second level (L2), two-digit) levels. We create a 3-level hierarchical taxonomy by manually linking Field-of-Science fields of the Science-Metrix journal classification to the OECD/FORD level-2 fields. 
To facilitate a more fine-grained analysis, we extend the aforementioned Field-of-Science taxonomy to level-4 and level-5 fields by employing a pipeline of AI techniques. We evaluate the coherence and the coverage of the Field-of-Science fields at the two additional levels, based on a synthesis of scientific publications in two case studies in the knowledge domains of Energy and Artificial Intelligence. Our results showcase that the proposed automatically generated Field-of-Science taxonomy captures the dynamics of the two research areas, encompassing their underlying structure and emerging scientific developments
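As a loose illustration of the community-detection step used to derive narrower fields, the sketch below groups co-occurring topics into communities via connected components of a topic graph. This is a simplification over invented data, not the SciNoBo pipeline itself, which combines Neural Topic Modeling with more sophisticated community detection.

```python
from collections import defaultdict

def communities(edges):
    """Connected components of an undirected graph given as (u, v) pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Topics that frequently co-occur in publications of a broader field;
# each resulting community is a candidate narrower (e.g. level-4) field.
edges = [("solar pv", "inverters"), ("inverters", "grid storage"),
         ("wind turbines", "blade design")]
l4_fields = communities(edges)
```

Each community would then be attached as a child of the level-3 field whose publications generated the topic graph.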

    On Constructing Repository Infrastructures: The D-NET Software Toolkit

    Due to the wide diffusion of digital repositories, organizations responsible for large research communities, such as national or project consortia, research institutions, and foundations, are increasingly drawn to setting up so-called repository infrastructure systems (e.g., OAIster (http://www.oaister.org), BASE (http://www.base-search.net), DAREnet-NARCIS (http://www.narcis.info)). Such systems offer web portals, services, and APIs for cross-operating over the metadata records of publications (lately also of experimental data and compound objects) aggregated from a set of repositories. Generally, they consist of two connected tiers: an aggregation system, which populates an information space of metadata records by harvesting and transforming (e.g., cleaning, enriching) records from a set of OAI-PMH-compatible data sources, typically repositories; and a web portal, which provides end-users with advanced functionality over that information space (search, browsing, annotations, recommendations, collections, user profiling, etc.). Typically, information spaces also offer access to third-party applications through standard APIs (e.g., OAI-PMH, SRW, OAI-ORE). Repository infrastructure systems address similar architectural and functional issues across several disciplines and application domains. On the one hand, they all deal, with more or less contingent complexity, with the generic problem of harvesting metadata records in a given format, transforming them into records of a target format, and delivering web portals to operate over these records. On the other hand, they have to cope with arbitrary numbers of repositories and hence administer them, from the automatic scheduling of harvesting and transformation actions and the definition of the corresponding transformation mappings, to the inherent scalability problems of coping with ever-growing numbers of incoming records. Existing solutions tend to privilege customization of software, neglecting general-purpose approaches. 
Typically, for example, aggregation systems are designed to generate metadata records of a format X from records of format Y, rather than being parametric with respect to such formats. Similarly, the participation of a repository in an infrastructure is driven by firm policies, and administrators often do not have the freedom to specify their own workflow by combining logical steps such as harvesting, storing, transforming, indexing, and validating as they prefer. In summary, repository infrastructure systems typically provide advanced and effective solutions tailored to the one scenario of interest, but can hardly be applied to different scenarios where similar but distinct requirements hold. As a consequence, an organization willing to set up a repository infrastructure system with peculiar requirements faces the "expensive" problem of designing and developing new software from scratch. In this paper, we present a general-purpose and cost-efficient solution for the construction of customized repository infrastructures, based on the D-NET Software Toolkit (www.d-net.research-infrastructures.eu), developed in the context of the DRIVER and DRIVER-II projects (http://www.driver-community.eu). D-NET offers a service-oriented framework whose services can be combined by developers to easily construct customized aggregation systems and personalized web portals. D-NET services can be customized, extended, and combined to match domain-specific scenarios, while the distribution, sharing, and orchestration of services enables the construction of scalable and robust repository infrastructures. As we shall describe in the following, D-NET is currently the enabling software of a number of European projects and national initiatives
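The workflow freedom the abstract argues for, combining logical steps such as harvesting, transforming, and validating as the administrator prefers, can be sketched as composable pipeline steps parametric in the metadata formats. The step names and record formats below are hypothetical; D-NET itself is a Java service-oriented framework, not this Python sketch.

```python
def transform(mapping):
    """Return a step that maps records of a source format to a target
    format via a field mapping, instead of hard-coding format X -> Y."""
    def step(records):
        return [{mapping[k]: v for k, v in r.items() if k in mapping}
                for r in records]
    return step

def validate(required):
    """Return a step that keeps only records carrying all required fields."""
    def step(records):
        return [r for r in records if required <= r.keys()]
    return step

def run(pipeline, records):
    # Apply each administrator-chosen step in order
    for step in pipeline:
        records = step(records)
    return records

# Hypothetical harvested Dublin Core records; the second lacks a title.
harvested = [{"dc:title": "On Repositories", "dc:creator": "Doe"},
             {"dc:creator": "Roe"}]
pipeline = [transform({"dc:title": "title", "dc:creator": "author"}),
            validate({"title"})]
clean = run(pipeline, harvested)
```

Because the pipeline is just an ordered list of steps, an administrator can reorder, drop, or add steps (e.g. indexing, storing) without changing the framework.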

    Profiling Attitudes for Personalized Information Provision

    PAROS is a generic system under design whose goal is to offer personalization, recommendation, and other adaptation services to information-providing systems. At its heart lies a rich user model able to capture several diverse aspects of user behavior, interests, preferences, and other attitudes. The user model is instantiated with profiles of users, obtained by analyzing and appropriately interpreting potentially arbitrary pieces of user-relevant information coming from diverse sources. These profiles are maintained by the system, updated incrementally as additional data on users becomes available, and used by a variety of information systems to adapt their functionality to the users’ characteristics
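An incremental profile update of the kind described, folding each new piece of user-relevant evidence into a set of interest weights, might look like the following sketch. The field names and the exponential-decay scheme are assumptions for illustration, not PAROS's actual (far richer) user model.

```python
class Profile:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.interests = {}          # topic -> weight

    def observe(self, topic, signal=1.0):
        """Fold one new observation into the profile incrementally:
        existing interests decay, the observed topic is reinforced."""
        for t in self.interests:
            self.interests[t] *= self.decay
        self.interests[topic] = self.interests.get(topic, 0.0) + signal

    def top(self, n=3):
        """Strongest current interests, for use by adapting systems."""
        return sorted(self.interests, key=self.interests.get,
                      reverse=True)[:n]

p = Profile()
for topic in ["jazz", "jazz", "cinema"]:
    p.observe(topic)
```

The decay factor makes recent evidence outweigh stale evidence, so the profile tracks shifting attitudes without reprocessing all historical data.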

    OpenMinTeD: A Platform Facilitating Text Mining of Scholarly Content

    The OpenMinTeD platform aims to bring full text Open Access scholarly content from a wide range of providers together with Text and Data Mining (TDM) tools from various Natural Language Processing frameworks and TDM developers in an integrated environment. In this way, it supports users who want to mine scientific literature with easy access to relevant content and allows running scalable TDM workflows in the cloud

    Up-regulation of the monocyte chemotactic protein-3 in sera from bone marrow transplanted children with torquetenovirus infection

    Torquetenovirus (TTV) represents a commensal human virus producing life-long viremia in approximately 80% of healthy individuals of all ages. A potential pathogenic role for TTV has been suggested in immunocompromised patients with hepatitis of unknown etiology sustained by strong proinflammatory cytokines

    Monitoring the open access policy of Horizon 2020

    This study is framed within the context of the contract ‘Monitoring the open access policy of Horizon 2020 – RTD/2019/SC/021’, reporting an authoritative set of metrics for compliance with the European Commission open access mandate within the Framework Programme thus far, and providing advice on how to systematically monitor compliance in the future. Open access requirements for publications under Horizon 2020 are set out in Article 29.2 of the Horizon 2020 Model Grant Agreement (MGA). Regarding open access to research data, the Commission is conducting the Horizon 2020 Open Research Data Pilot (ORDP). The ORDP aims to improve and maximise access to, and reuse of, research data generated by Horizon 2020 projects, balancing the need for openness with the protection of intellectual property rights, privacy and security concerns, and commercialisation, as well as questions of data management and preservation. The present study aims to examine, monitor, and quantify compliance with the open access requirements of the MGA, for both publications and research data. The study concludes with specific recommendations to improve the monitoring of compliance with the policy under Horizon Europe, together with an assessment of the efficiency and effectiveness of the Horizon 2020 open access policy. The key findings of this study indicate that the European Commission’s leadership in Open Science policy has paid off. Compliance has steadily increased over recent years, achieving a success rate that places the European Commission at the forefront globally (83% open access to scientific publications). What is also apparent from the study is that monitoring – particularly with regard to the specific terms of the policy – cannot be achieved by self-reporting alone, or without the European Commission collaborating closely with other funding agencies across Europe and beyond, to agree on common standards and the common elements of the underlying infrastructure. 
In particular, the European Open Science Cloud (EOSC) should encompass all such components that are needed to foster a linked ecosystem, in which information is exchanged on demand and which eases the process for both researchers (who only need to deposit once) and funders (who need only record information once)
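The headline compliance figure cited above (83% open access to scientific publications) is, at its core, a share-of-publications metric. The sketch below computes it over an invented synthetic sample purely to make the calculation concrete; the field names and data are not from the study.

```python
def oa_compliance(publications):
    """Fraction of publications flagged open access (0.0 if none given)."""
    if not publications:
        return 0.0
    return sum(1 for p in publications if p["open_access"]) / len(publications)

# Synthetic sample of 100 publication records, 83 of them open access
sample = [{"doi": f"10.5281/{i}", "open_access": i % 100 < 83}
          for i in range(100)]
rate = oa_compliance(sample)
```

Real monitoring, as the study stresses, is harder than this arithmetic: the difficulty lies in reliably collecting the per-publication flags without depending on self-reporting alone.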

    A conceptual model for building publishing services on top of a distributed network of repositories

    Green and gold are artificial distinctions, mainly driven by the technical and business separation between publishing platforms and the institutional and thematic repositories operated by research institutions or research communities. Building on two efforts presented at Open Repositories 2018, (1) Next Generation Repositories functionalities and (2) prospects for greater integration between repositories and journal publishing platforms, we have developed a conceptual model for publishing overlay services on top of distributed repository platforms. The model removes the green and gold dichotomy by providing a system that integrates publishing capabilities with repository capabilities, thereby combining the strengths of the two worlds, while building out an integrative and interoperable infrastructure for scholarly communication